
    Multiple Factorizations of Bivariate Linear Partial Differential Operators

    Full text link
    We study the case in which a bivariate Linear Partial Differential Operator (LPDO) of order three or four has several different factorizations. We prove that a third-order bivariate LPDO has first-order left and right factors with co-prime symbols if and only if the operator admits a factorization into three factors, of which the leftmost is exactly the initial left factor and the rightmost is exactly the initial right factor. We show that the co-primality condition on the symbols of the initial left and right factors is essential, and that the analogous statement does not hold as stated for LPDOs of order four. We then consider completely reducible LPDOs, which are defined via an intersection of principal ideals; such operators can likewise possess several different factorizations. Considering all possible cases, we rule some of them out using the first result of the paper, and we also derive explicit formulae giving sufficient conditions for the complete reducibility of an LPDO.
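
    For orientation, here is a rough symbolic restatement of the third-order statement in ad-hoc notation (a sketch, not the paper's exact formulation; F_1, F_2, A, B, M are placeholder names):

        % L: bivariate LPDO of order three; F_1, F_2: first-order LPDOs.
        \[
          L = F_1 \circ A = B \circ F_2,
          \quad
          \gcd\bigl(\operatorname{Sym} F_1,\ \operatorname{Sym} F_2\bigr) = 1
          \;\Longleftrightarrow\;
          L = F_1 \circ M \circ F_2
          \ \text{for some first-order LPDO } M .
        \]
        % Complete reducibility (roughly): the principal ideal of L is an
        % intersection of principal ideals,
        %   \langle L \rangle = \langle L_1 \rangle \cap \dots \cap \langle L_k \rangle .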

    Intertwining Laplace Transformations of Linear Partial Differential Equations

    Full text link
    We propose a generalization of Laplace transformations to the case of linear partial differential operators (LPDOs) of arbitrary order in R^n. Practically all previously proposed differential transformations of LPDOs are particular cases of this transformation (the intertwining Laplace transformation, ILT). We give a complete algorithm for the construction of the ILT and describe the classes of operators in R^n amenable to this transformation. Keywords: integration of linear partial differential equations, Laplace transformation, differential transformation. Comment: LaTeX, 25 pages; v2: minor misprints corrected.
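
    As background (standard textbook material, not taken from the paper), the classical Laplace transformation that the ILT generalizes acts on a second-order hyperbolic operator in the plane roughly as follows, in our notation:

        \[
          L \;=\; \partial_x\partial_y + a\,\partial_x + b\,\partial_y + c
            \;=\; (\partial_x + b)(\partial_y + a) - h
            \;=\; (\partial_y + a)(\partial_x + b) - k,
        \]
        \[
          h = a_x + ab - c, \qquad k = b_y + ab - c .
        \]
        % If h \neq 0, the substitution u_1 = (\partial_y + a)\,u maps solutions of
        % L u = 0 to solutions of a new operator of the same form, with invariants
        \[
          h_1 = 2h - k - (\ln h)_{xy}, \qquad k_1 = h .
        \]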

    Comparison of the efficiency of zero and first order minimization methods in neural networks

    Get PDF
    To minimize the objective function in neural networks, first-order methods are usually used, which involve repeated calculation of the gradient. The number of variables in modern neural networks can run into the many thousands or even millions. Numerous experiments show that the time to compute the gradient of a function of N variables analytically is approximately N/5 times longer than the time to compute the function itself. The article considers the possibility of using zero-order methods to minimize the function. In particular, a new zero-order minimization method, descent over two-dimensional subspaces, is proposed. The convergence rates of three methods are compared: standard gradient descent with automatic step selection, coordinate descent with step selection for each coordinate, and descent over two-dimensional subspaces. It is shown that, for the neural network training problems considered, the efficiency of properly organized zero-order methods is not lower than that of gradient methods.
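
    To make the idea of descent over two-dimensional subspaces concrete, the following is a minimal sketch that minimizes over coordinate pairs using only function evaluations; it is an illustration of the general idea, not the authors' algorithm, and the function name, probe directions, and step-selection rule are assumptions of this sketch:

        import numpy as np

        def descent_over_2d_subspaces(f, x0, sweeps=50, step=0.5, shrink=0.5, tol=1e-6):
            """Zero-order sketch: repeatedly minimize f over two-dimensional
            coordinate subspaces using only function evaluations (a crude
            pattern search; the paper's own step-selection rules are not
            reproduced here)."""
            x = np.asarray(x0, dtype=float).copy()
            fx = f(x)
            n = x.size
            # eight probe directions in a coordinate plane (axes and diagonals)
            dirs = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)]
            for _ in range(sweeps):
                improved = False
                for i in range(0, n - 1, 2):        # sweep over coordinate pairs (i, i+1)
                    h = step
                    while h > tol:
                        trial_f, trial_x = fx, x
                        for di, dj in dirs:
                            y = x.copy()
                            y[i] += h * di
                            y[i + 1] += h * dj
                            fy = f(y)
                            if fy < trial_f:
                                trial_f, trial_x = fy, y
                        if trial_f < fx:            # accept the best probe in this plane
                            fx, x = trial_f, trial_x
                            improved = True
                        else:                       # no progress: refine the step
                            h *= shrink
                if not improved:
                    break
            return x, fx

        # usage: a 4-variable quadratic with minimum at (0, 1, 2, 3)
        f = lambda z: float(np.sum((z - np.arange(z.size)) ** 2))
        x_best, f_best = descent_over_2d_subspaces(f, np.zeros(4))
        print(x_best, f_best)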

    Laplace Invariants for General Hyperbolic Systems

    Full text link
    We consider the generalization of Laplace invariants to linear differential systems of arbitrary rank and dimension. We discuss the completeness of certain subsets of invariants.
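
    For context (the standard scalar setting that is being generalized, not material from the abstract): "invariant" here refers to invariance under gauge transformations, and a set of invariants is called complete if it determines the operator up to such transformations. A sketch of the scalar case, in our notation:

        % Gauge transformation by a nonvanishing function g = e^{\varphi}:
        %   L \;\mapsto\; g^{-1} \circ L \circ g ,
        % which for L = \partial_x\partial_y + a\,\partial_x + b\,\partial_y + c acts as
        \[
          a \mapsto a + \varphi_y, \qquad
          b \mapsto b + \varphi_x, \qquad
          c \mapsto c + a\varphi_x + b\varphi_y + \varphi_x\varphi_y + \varphi_{xy},
        \]
        % while the Laplace invariants h = a_x + ab - c and k = b_y + ab - c are unchanged.
        % For systems, a, b, c become matrix-valued and analogous invariants are sought.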